53 research outputs found

    Automating the production of communicative gestures in embodied characters

    In this paper we highlight the challenges in modeling communicative gestures for Embodied Conversational Agents (ECAs). We describe models that aim to capture and understand the specific characteristics of communicative gestures, in order to envision how an automatic communicative gesture production mechanism could be built. The work is inspired by research on how the characteristics of human gestures (e.g., hand shape, movement, orientation, and timing with respect to speech) convey meaning. We present approaches to computing where to place a gesture, which shape it takes, and how its shape evolves through time. We focus on a particular model based on theoretical frameworks on metaphors and embodied cognition, which argue that people can represent, reason about, and convey abstract concepts using physical representations and processes that can be expressed through physical gestures.
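
    The following sketch illustrates, in broad strokes, how such a production pipeline could align gesture strokes with prominent words and pick a hand shape from a word's meaning. It is a minimal illustration only, not the model described in the paper; the word and gesture structures, the semantic tags, and the shape table are invented for the example.

```python
# Illustrative sketch (not the paper's model): align gesture strokes with
# prominent words and choose a hand shape from each word's semantic tag.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Word:
    text: str
    start: float                        # seconds
    end: float
    prominent: bool                     # carries pitch accent / stress
    semantic_tag: Optional[str] = None  # e.g. "container", "rejection"

@dataclass
class Gesture:
    stroke_start: float
    stroke_end: float
    hand_shape: str

# Hypothetical mapping from an abstract meaning to a physical hand shape,
# in the spirit of metaphor-based gesture models.
SHAPE_BY_TAG = {
    "container": "cupped_hand",    # holding an idea as if it were an object
    "rejection": "flat_hand_away",
    "quantity": "spread_hands",
}

def plan_gestures(words: List[Word]) -> List[Gesture]:
    """Place one gesture per prominent word, timing the stroke with the word."""
    gestures = []
    for w in words:
        if not w.prominent:
            continue
        shape = SHAPE_BY_TAG.get(w.semantic_tag, "relaxed_hand")
        # The stroke is synchronised with the prominent word; preparation and
        # retraction phases would be added by an animation layer (not shown).
        gestures.append(Gesture(w.start, w.end, shape))
    return gestures

if __name__ == "__main__":
    utterance = [
        Word("we", 0.0, 0.2, prominent=False),
        Word("reject", 0.2, 0.7, prominent=True, semantic_tag="rejection"),
        Word("that", 0.7, 0.9, prominent=False),
        Word("idea", 0.9, 1.4, prominent=True, semantic_tag="container"),
    ]
    for g in plan_gestures(utterance):
        print(g)
```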

    Autonomous agents and avatars in REVERIE’s virtual environment

    In this paper, we describe the enactment of autonomous agents and avatars in REVERIE's web-based social collaborative virtual environment, which supports natural, human-like behavior, physical interaction, and engagement. Represented by avatars, users feel immersed in this virtual world, in which they can meet and share experiences as in real life. Like the avatars, the autonomous agents that act in this world are capable of demonstrating human-like non-verbal behavior and of facilitating social interaction. We describe how the reasoning components of the REVERIE system connect to and cooperatively control both autonomous agents and the avatars representing users.
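
    As a rough illustration of the idea that agents and avatars can be driven through a shared behavior interface, the sketch below shows one possible structure; the class names and behavior vocabulary are assumptions made for illustration, not REVERIE's actual components or API.

```python
# Illustrative sketch (not REVERIE's actual API): autonomous agents and
# user-driven avatars share one behaviour interface, so the same control
# loop can drive either kind of character.
from abc import ABC, abstractmethod
from typing import Dict, List

class Character(ABC):
    """Anything embodied in the shared virtual environment."""
    @abstractmethod
    def next_behaviours(self, scene: Dict[str, str]) -> List[str]:
        ...

class AutonomousAgent(Character):
    def next_behaviours(self, scene: Dict[str, str]) -> List[str]:
        # A (stub) reasoning component: gaze at the active speaker and nod
        # as a backchannel whenever someone is talking.
        behaviours = []
        speaker = scene.get("active_speaker")
        if speaker:
            behaviours += [f"gaze_at:{speaker}", "nod"]
        return behaviours

class Avatar(Character):
    def __init__(self) -> None:
        self.pending_user_input: List[str] = []

    def next_behaviours(self, scene: Dict[str, str]) -> List[str]:
        # Tracked user input is translated into the same behaviour
        # vocabulary the autonomous agents use.
        out, self.pending_user_input = self.pending_user_input, []
        return out

def simulation_step(characters: List[Character], scene: Dict[str, str]) -> None:
    for c in characters:
        for behaviour in c.next_behaviours(scene):
            print(f"{type(c).__name__} -> {behaviour}")   # rendering stub

if __name__ == "__main__":
    avatar = Avatar()
    avatar.pending_user_input = ["wave"]
    simulation_step([AutonomousAgent(), avatar], {"active_speaker": "user_42"})
```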

    Vers des Agents Conversationnels Animés dotés d'émotions et d'attitudes sociales

    In this article, we propose an architecture for a socio-affective Embodied Conversational Agent (ECA). The different computational models of this architecture enable an ECA to express emotions and social attitudes during an interaction with a user. Based on corpora of actors expressing emotions, models have been defined to compute the emotional facial expressions of an ECA and the characteristics of its body movements. An approach centered on user perception has been used to design models that define how an ECA should adapt its non-verbal behavior according to the social attitude it wants to display and to the behavior of its interlocutor. The emotions and social attitudes to express are computed by cognitive models presented in this article.
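
    A minimal sketch of how the output of such cognitive models (an emotion plus a social attitude) could be mapped onto coarse nonverbal behavior parameters is given below. The parameter names and numeric values are invented for illustration and are not taken from the article.

```python
# Illustrative sketch (values invented): map an emotion and a social attitude
# onto coarse nonverbal behaviour parameters for an ECA.
from dataclasses import dataclass

@dataclass
class BehaviourParams:
    facial_expression: str
    expression_intensity: float   # 0..1
    gesture_amplitude: float      # 0..1
    gaze_at_user_ratio: float     # share of time spent looking at the user

EMOTION_TO_EXPRESSION = {
    "joy": ("smile", 0.8),
    "anger": ("frown", 0.7),
    "neutral": ("neutral", 0.2),
}

def plan_behaviour(emotion: str, attitude: str) -> BehaviourParams:
    expression, intensity = EMOTION_TO_EXPRESSION.get(emotion, ("neutral", 0.2))
    # Hypothetical adaptation rules: a dominant attitude comes with larger
    # gestures and more direct gaze, a friendly one with more smiling.
    if attitude == "dominant":
        return BehaviourParams(expression, intensity, 0.9, 0.8)
    if attitude == "friendly":
        return BehaviourParams("smile", max(intensity, 0.6), 0.5, 0.6)
    return BehaviourParams(expression, intensity, 0.3, 0.4)

if __name__ == "__main__":
    print(plan_behaviour("joy", "dominant"))
    print(plan_behaviour("anger", "friendly"))
```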

    A framework for human-like behavior in an immersive virtual world

    Just as readers feel immersed when a story-line adheres to their experiences, users will more easily feel immersed in a virtual environment if the behavior of the characters in that environment adheres to their expectations, based on their life-long observations of the real world. This paper introduces a framework that allows authors to establish natural, human-like behavior, physical interaction, and emotional engagement for characters living in a virtual environment. By representing people as realistic virtual characters, the framework allows them to feel immersed in an Internet-based virtual world in which they can meet and share experiences as naturally as in real life. Rather than just being visualized in a 3D space, the virtual characters (autonomous agents as well as avatars representing users) in the immersive environment facilitate social interaction and multi-party collaboration, mixing the virtual with the real.

    Model of nonverbal behaviors expressing social attitudes in the simulation of conversational groups

    Embodied Conversational Agents are virtual characters whose main purpose is to interact with a human user. They are used in various domains such as personal assistance, social training, and video games. In order to improve their capabilities, it is possible to give them the ability to produce human-like behaviors. Users, even when aware that they are interacting with a machine, still analyze and identify social behaviors from the signals these virtual characters produce. Research on Embodied Conversational Agents has long focused on the reproduction and recognition of emotions by virtual characters; attention has now shifted to the ability to express different social attitudes. These attitudes reflect a behavioral style and are expressed through different bodily modalities, such as facial expressions, gestures, and gaze. We propose a model that allows an agent to produce different nonverbal behaviors expressing different social attitudes in a conversation. The set of behaviors produced by the model allows a group of agents animated by it to simulate a conversation, without any verbal content. Two evaluations of the model were conducted, one on the Internet and one in a Virtual Reality environment, to verify that the attitudes produced are well recognized.
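
    The sketch below illustrates the general idea of a group of agents simulating a conversation without verbal content, with each agent's attitude biasing how often it takes the turn, gestures, and gazes at the speaker. It is an illustrative toy, assuming invented attitude profiles, and is not the thesis model.

```python
# Illustrative toy (not the thesis model): a group of agents simulates a
# conversation without verbal content; each agent's attitude biases how often
# it takes the turn, gestures, and gazes at the current speaker.
import random

random.seed(0)

# (turn_taking_bias, gesture_rate, gaze_at_speaker_rate) -- invented values.
ATTITUDE_PROFILES = {
    "dominant":   (0.6, 0.8, 0.4),
    "friendly":   (0.3, 0.5, 0.8),
    "submissive": (0.1, 0.2, 0.9),
}

def simulate(agents: dict, steps: int = 4) -> None:
    names = list(agents)
    for t in range(steps):
        # Next speaker chosen with probability proportional to turn-taking bias.
        weights = [ATTITUDE_PROFILES[agents[n]][0] for n in names]
        speaker = random.choices(names, weights=weights)[0]
        print(f"t={t}: {speaker} holds the turn")
        for name in names:
            if name == speaker:
                continue
            _, gesture_rate, gaze_rate = ATTITUDE_PROFILES[agents[name]]
            acts = []
            if random.random() < gaze_rate:
                acts.append(f"gaze_at:{speaker}")
            if random.random() < gesture_rate:
                acts.append("beat_gesture")
            print(f"  {name}: {', '.join(acts) or 'idle'}")

if __name__ == "__main__":
    simulate({"A": "dominant", "B": "friendly", "C": "submissive"})
```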

    Towards the Generation of Expressive Co-Speech Gestures

    A computational model of social attitude effects on the nonverbal behavior for a relational agent

    The relations we have with others subtly influence our body gestures. These gestures are cues that are used, often unconsciously, to communicate and interpret an attitude during an interaction: depending on the gestures and facial signals one displays, a social attitude can be perceived. In this paper, we propose to model socio-emotional agents and how they can express such an attitude in a dyadic interaction. The focus is on generating the nonverbal behavior of the virtual agent given the social attitude it wants to convey. Depending on the nature of its relation with someone else, its role, and its desires, an agent should display a different attitude and therefore different nonverbal behaviors. We propose a computational model based on findings from the human and social sciences on the correspondence between social attitudes and nonverbal behaviors. This model is used to select the appropriate behavior of an agent during an interaction, depending on the social attitude it wants to convey and on its gender.
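
    A minimal rule-based sketch of this kind of selection, mapping a target attitude (with gender as a modulating factor) to a set of nonverbal cues, is shown below. The cue names, rules, and the particular gender modulation are invented for illustration and do not reproduce the paper's model.

```python
# Illustrative sketch (rules and cue names invented): select nonverbal cues
# for a relational agent from the social attitude it wants to convey, with
# gender used only as a modulating factor.
from typing import Dict, List

BASE_CUES: Dict[str, List[str]] = {
    "dominant": ["expanded_posture", "direct_gaze", "large_gestures"],
    "friendly": ["smile", "head_tilt", "frequent_nods"],
    "hostile":  ["frown", "closed_posture", "gaze_aversion"],
}

def select_cues(attitude: str, gender: str) -> List[str]:
    cues = list(BASE_CUES.get(attitude, ["neutral_posture"]))
    # Hypothetical modulation: the same attitude may be displayed with
    # slightly different cues depending on the agent's gender.
    if attitude == "dominant" and gender == "female":
        cues.append("sustained_gaze")
    return cues

if __name__ == "__main__":
    print(select_cues("dominant", "female"))
    print(select_cues("friendly", "male"))
```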